Synaptic Learning Rules and Sparse Coding in a Model Sensory System

Authors

  • Luca A. Finelli
  • Seth Haney
  • Maxim Bazhenov
  • Mark Stopfer
  • Terrence J. Sejnowski
Abstract

Neural circuits exploit numerous strategies for encoding information. Although the functional significance of individual coding mechanisms has been investigated, ways in which multiple mechanisms interact and integrate are not well understood. The locust olfactory system, in which dense, transiently synchronized spike trains across ensembles of antennal lobe (AL) neurons are transformed into a sparse representation in the mushroom body (MB; a region associated with memory), provides a well-studied preparation for investigating the interaction of multiple coding mechanisms. Recordings made in vivo from the insect MB demonstrated highly specific responses to odors in Kenyon cells (KCs). Typically, only a few KCs from the recorded population of neurons responded reliably when a specific odor was presented. Different odors induced responses in different KCs. Here, using a biologically plausible model, we explored the possibility that a form of plasticity may control and tune the synaptic weights of inputs to the mushroom body to ensure the specificity of KCs' responses to familiar or meaningful odors. We found that plasticity at the synapses between the AL and the MB efficiently regulated the delicate tuning necessary to selectively filter the intense AL oscillatory output and condense it to a sparse representation in the MB. Activity-dependent plasticity drove the observed specificity, reliability, and expected persistence of odor representations, suggesting a role for plasticity in information processing and making a testable prediction about synaptic plasticity at AL-MB synapses.
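
As an informal illustration of the mechanism summarized above, the following is a minimal rate-based sketch of how an activity-dependent, Hebbian-like rule at AL-to-MB synapses can yield sparse, odor-specific Kenyon cell responses. It is not the authors' conductance-based spiking model: the thresholded-linear KC model, the correlational learning rule, and every parameter value below are illustrative assumptions made only for this sketch.

# Minimal, hypothetical sketch (not the paper's model): a high KC threshold
# filters dense AL drive, and a Hebbian-like rule applied only to responding
# KCs sharpens their tuning to the odors that recruited them.
import numpy as np

rng = np.random.default_rng(0)

n_pn, n_kc, n_odors = 100, 1000, 5

# Each odor activates roughly half of the AL projection neurons (dense AL code).
odors = (rng.random((n_odors, n_pn)) < 0.5).astype(float)

# Random AL->MB connectivity with uniform initial weights (assumed).
conn = rng.random((n_kc, n_pn)) < 0.5
W = conn * rng.uniform(0.0, 0.1, size=(n_kc, n_pn))

theta = 1.7     # high KC firing threshold, enforcing sparseness (assumed)
eta = 0.005     # learning rate (assumed)
w_max = 0.2     # synaptic weight ceiling (assumed)

def kc_rates(W, pn):
    """Thresholded-linear KC response to a vector of PN activities."""
    return np.maximum(W @ pn - theta, 0.0)

for _ in range(50):                      # repeated odor presentations
    for odor in odors:
        responders = kc_rates(W, odor) > 0
        # Correlational update restricted to responding KCs:
        # potentiate synapses from active PNs, depress those from silent PNs.
        dw = eta * (2.0 * odor - 1.0)    # +eta where the PN is active, -eta otherwise
        W[responders] = np.clip(W[responders] + dw * conn[responders], 0.0, w_max)

# After training, each odor should drive a small, reliable subset of KCs.
for i, odor in enumerate(odors):
    active = kc_rates(W, odor) > 0
    print(f"odor {i}: {active.mean():.1%} of KCs respond")

The two assumed ingredients mirror the abstract's argument: a high KC threshold filters the intense, dense AL output, while the plasticity rule, by strengthening only the synapses that drove a response and weakening the rest, makes the few responding KCs progressively more selective and reliable for the odors that recruited them.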

Similar Articles

Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common pri...

Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons

The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's S...

Adaptive compressed sensing - A new class of self-organizing coding models for neuroscience

Sparse coding networks, which utilize unsupervised learning to maximize coding efficiency, have successfully reproduced response properties found in primary visual cortex [1]. However, conventional sparse coding models require that the coding circuit can fully sample the sensory data in a one-to-one fashion, a requirement not supported by experimental data from the thalamo-cortical projection. ...

Image Classification via Sparse Representation and Subspace Alignment

Image representation is a crucial problem in image processing, where many low-level image representations exist, e.g., SIFT, HOG, and so on. However, there is a missing link between low-level and high-level semantic representations. In fact, traditional machine learning approaches, e.g., non-negative matrix factorization, sparse representation, and principal component analysis, are employed to d...

Journal:
  • PLoS Computational Biology

Volume 4, Issue -

Pages -

Publication year 2008